59 research outputs found

    Time-locked perceptual fading induced by visual transients

    After prolonged fixation, a stationary object in the peripheral visual field fades and disappears from visual awareness, especially at low luminance contrast (the Troxler effect). Here, we report that similar fading can be triggered by visual transients, such as additional visual stimuli flashed near the object, apparent motion, or a brief removal of the object itself (blinking). The fading occurs even without prolonged adaptation and is time-locked to the presentation of the visual transients. Experiments showed that the effect of a flashed object decreased monotonically as a function of its distance from the target object. Consistent with this result, when apparent motion, consisting of a sequence of flashes, was presented between stationary disks, the target disks perceptually disappeared as if erased by the moving object. Blinking the target disk, instead of flashing an additional visual object, was itself sufficient to induce the fading; the effect peaked at a blink duration of about 80 msec. Our findings reveal a unique mechanism that controls the visibility of visual objects in a spatially selective and time-locked manner in response to transient visual inputs. Possible mechanisms underlying this phenomenon are discussed

    Generic decoding of seen and imagined objects using hierarchical visual features

    Object recognition is a key function in both human and machine vision. While recent studies have achieved fMRI decoding of seen and imagined contents, prediction has been limited to the categories used as training examples. We present a decoding approach for arbitrary objects, based on the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those from a convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower/higher visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond the decoder's training set) from a set of computed features for numerous object images. Furthermore, the decoding of imagined objects reveals progressive recruitment of higher to lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval
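    The decoding scheme described above can be roughly illustrated with a sketch: a ridge-regularized linear decoder is trained to map fMRI patterns to feature values, and a category is then identified by correlating the predicted feature vector with category-average features. All dimensions, data, and the regularization value below are simulated assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 50 voxels, 20 feature units, 200 training trials.
n_vox, n_feat, n_train = 50, 20, 200

# Simulated linear encoding: fMRI responses are noisy projections of features.
W_true = rng.normal(size=(n_feat, n_vox))
feat_train = rng.normal(size=(n_train, n_feat))
fmri_train = feat_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Ridge-regularized linear decoder from voxels to feature units (closed form).
lam = 1.0
A = fmri_train.T @ fmri_train + lam * np.eye(n_vox)
W_dec = np.linalg.solve(A, fmri_train.T @ feat_train)   # (n_vox, n_feat)

# Identify the category of a new trial by correlating the decoded feature
# vector with precomputed category-average feature vectors.
cat_feats = rng.normal(size=(10, n_feat))               # 10 candidate categories
true_cat = 3
fmri_test = cat_feats[true_cat] @ W_true                # simulated test pattern
pred_feat = fmri_test @ W_dec

corr = [np.corrcoef(pred_feat, c)[0, 1] for c in cat_feats]
identified = int(np.argmax(corr))
print(identified)  # best-matching category index
```

    Because identification is by similarity against a feature set that can be computed for any image, the candidate categories need not appear in the decoder's training data, which is what makes the decoding "generic".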

    Attention modulates neural representation to render reconstructions according to subjective appearance

    Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect the top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attended to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than the unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found across a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance
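    The "superimposed images with biased contrasts" model mentioned above can be illustrated with a minimal sketch. The images, weight, and correlation check below are hypothetical stand-ins, not the study's stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical grayscale stimulus images with values in [0, 1].
attended = rng.random((64, 64))
unattended = rng.random((64, 64))

def biased_superposition(img_a, img_b, w_attended=0.75):
    """Superimpose two images with contrast biased toward the attended one."""
    return w_attended * img_a + (1.0 - w_attended) * img_b

model = biased_superposition(attended, unattended)

def corr(x, y):
    """Pearson correlation between two images."""
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

# The modeled reconstruction resembles the attended image more closely.
print(corr(model, attended) > corr(model, unattended))  # True
```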

    Inter-subject neural code converter for visual image representation

    Brain activity patterns differ from person to person, even for an identical stimulus. In functional brain mapping studies, it is important to align brain activity patterns between subjects for group statistical analyses. While anatomical templates are widely used for inter-subject alignment in functional magnetic resonance imaging (fMRI) studies, they are not sufficient to identify the mapping between voxel-level functional responses representing specific mental contents. Recent work has suggested that statistical learning methods could be used to transform individual brain activity patterns into a common space while preserving representational content. Here, we propose a flexible method for functional alignment, the "neural code converter," which converts one subject's brain activity pattern into another's representing the same content. The neural code converter is designed to learn the statistical relationship between the fMRI activity patterns of paired subjects obtained while they viewed an identical series of stimuli; it predicts the signal intensity of individual voxels of one subject from a multi-voxel pattern of the other. To test this method, we used fMRI activity patterns measured while subjects observed visual images consisting of random and structured patches. We show that fMRI activity patterns for visual images not used to train the converter could be predicted from those of another subject whose brain activity was recorded for the same stimuli, and that visual images could be accurately reconstructed from the predicted activity patterns alone. Furthermore, we show that a classifier trained only on predicted fMRI activity patterns could accurately classify measured fMRI activity patterns. These results demonstrate that the neural code converter can translate neural codes between subjects while preserving content related to visual images. Beyond functional alignment and decoding, the method may also provide a basis for brain-to-brain communication, using converted patterns to design brain stimulation
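    The converter idea can be sketched in a few lines, assuming a purely linear source-to-target mapping and simulated paired responses; none of the dimensions, noise levels, or data below come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

n_src, n_tgt, n_train, n_test = 120, 100, 300, 20

# Simulated paired responses to identical stimuli: the target subject's
# pattern is assumed to be a noisy linear transform of the source pattern.
M_true = rng.normal(size=(n_src, n_tgt)) / np.sqrt(n_src)
src_train = rng.normal(size=(n_train, n_src))
tgt_train = src_train @ M_true + 0.1 * rng.normal(size=(n_train, n_tgt))

# Converter: ridge regression predicting each target voxel from all
# source voxels (one closed-form solve covers every target voxel).
lam = 1.0
A = src_train.T @ src_train + lam * np.eye(n_src)
M_conv = np.linalg.solve(A, src_train.T @ tgt_train)

# Convert held-out source patterns and score per-voxel prediction accuracy.
src_test = rng.normal(size=(n_test, n_src))
tgt_test = src_test @ M_true + 0.1 * rng.normal(size=(n_test, n_tgt))
tgt_pred = src_test @ M_conv

r = [np.corrcoef(tgt_pred[:, v], tgt_test[:, v])[0, 1] for v in range(n_tgt)]
mean_r = float(np.mean(r))
print(mean_r > 0.9)  # high voxel-wise correlation on held-out stimuli
```

    In this sketch the converter is a single linear map; the key property, mirrored from the abstract, is that it is evaluated on stimuli never used for training.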

    End-to-End Deep Image Reconstruction From Human Brain Activity

    Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the available data are thought to be insufficient for training a complex network with numerous parameters. Instead, a pre-trained DNN usually serves as a proxy for hierarchical visual representations, and fMRI data are used to decode individual DNN features of a stimulus image with a simple linear model, which are then passed to a reconstruction module. Here, we directly trained a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We accomplished this by training a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss), using up to 6,000 training samples (natural images and fMRI responses). The model was tested on independent datasets and directly reconstructed images using fMRI patterns as input. Reconstructions obtained with the proposed method resembled the test stimuli (natural and artificial images), and reconstruction accuracy increased as a function of training-data size. Ablation analyses indicated that the feature loss played a critical role in achieving accurate reconstruction. Our results show that an end-to-end model can learn a direct mapping between brain activity and perception
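    The feature-loss term described above can be sketched as follows. Here a random projection with a ReLU stands in for a pretrained DNN layer, and the adversarial term of the GAN objective is omitted; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "feature extractor" for a pretrained DNN layer: a fixed random
# projection followed by a ReLU (NOT a real DNN, just an assumption here).
W_feat = rng.normal(size=(256, 64))

def features(img_batch):
    """Map flattened images (n, 256) into a 64-dim 'high-level' space."""
    return np.maximum(img_batch @ W_feat, 0.0)

def pixel_loss(recon, target):
    """Plain MSE in pixel space."""
    return float(np.mean((recon - target) ** 2))

def feature_loss(recon, target):
    """MSE in feature space: penalizes mismatch of high-level structure."""
    return float(np.mean((features(recon) - features(target)) ** 2))

def total_loss(recon, target, alpha=1.0):
    """Combined objective of the kind described (adversarial term omitted)."""
    return pixel_loss(recon, target) + alpha * feature_loss(recon, target)

# A reconstruction close to the target scores lower than an unrelated one.
target = rng.random((8, 256))
recon_close = target + 0.05 * rng.normal(size=(8, 256))
recon_far = rng.random((8, 256))
print(total_loss(recon_close, target) < total_loss(recon_far, target))  # True
```

    The design point mirrored from the abstract is that the loss is evaluated in a fixed feature space rather than only on pixels, so the generator is pushed toward perceptually relevant structure.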

    Characterization of deep neural network features by decodability from human brain activity

    Achievements of near human-level performance in object recognition by deep neural networks (DNNs) have triggered a flood of comparative studies between the brain and DNNs. Using a DNN as a proxy for hierarchical visual representations, our recent study found that human brain activity patterns measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into DNN feature values given the same inputs. However, not all DNN features are equally well decoded, indicating a gap between the DNN and human vision. Here, we present a dataset derived from DNN feature decoding analyses, which includes fMRI signals of five human subjects during image viewing, decoded feature values of DNNs (AlexNet and VGG19), and the decoding accuracies of individual DNN features with their rankings. The decoding accuracies of individual features were highly correlated between subjects, suggesting systematic differences between the brain and DNNs. We hope the present dataset will contribute to revealing the gap between the brain and DNNs and provide an opportunity to make use of the decoded features for further applications

    Inter-individual deep image reconstruction via hierarchical neural code conversion

    The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, brain activity measured under identical input exhibits substantially different patterns across individuals. Although anatomical and functional alignment methods have been proposed for functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual content. In this study, we trained a functional alignment method called a neural code converter, which predicts a target subject's brain activity pattern from a source subject's pattern given the same stimulus, and analyzed the converted patterns by decoding hierarchical visual features and reconstructing perceived images. The converters were trained on fMRI responses to identical sets of natural images presented to pairs of individuals, using voxels in the visual cortex covering V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network using decoders pre-trained on the target subject, and then reconstructed images from the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas of the same levels. Deep neural network feature decoding at each layer showed higher decoding accuracies from the corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. Visual images were reconstructed with recognizable silhouettes of objects even with relatively small amounts of converter training data. Decoders trained on data pooled from multiple individuals through conversion showed a slight improvement over those trained on a single individual. These results demonstrate that hierarchical and fine-grained representations can be converted by functional alignment while preserving sufficient visual information to enable inter-individual visual image reconstruction

    Categorical discrimination of human body parts by magnetoencephalography

    Humans recognize body parts categorically. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, presented either as a whole or as isolated parts, approximately 190 ms after image onset. The extent to which body-sensitive responses show specificity for different body-part categories remains largely unclear. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Brain activity was measured by magnetoencephalography (MEG) in nine subjects while they viewed 14 images of feet, hands, mouths, and objects. We decoded the categories of the presented images from the MEG signals using a support vector machine (SVM) and calculated decoding accuracy by 10-fold cross-validation. For each subject, an apparently body-sensitive response was observed, and the MEG signals corresponding to the three body-part categories were classified based on signals in the occipitotemporal cortex. The accuracy in decoding body-part categories (peaking at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. The time course and location suggest that the responses are body-sensitive and carry information about body-part category. This non-invasive method can thus decode category information of a visual object with high temporal and spatial resolution, which may have a significant impact on brain–machine interface research
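    The cross-validated decoding analysis can be sketched with simulated data. A nearest-centroid classifier is used here as a simple stand-in for the SVM named in the abstract, and all trial counts, features, and separations are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated MEG feature vectors: 3 body-part categories (e.g. feet, hands,
# mouths), 60 trials each, 40 sensor-by-time features; class means differ.
n_per, n_feat, n_cls = 60, 40, 3
means = rng.normal(size=(n_cls, n_feat))
X = np.vstack([means[c] + rng.normal(size=(n_per, n_feat))
               for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)

# 10-fold cross-validation with a nearest-centroid classifier
# (a stand-in for the SVM used in the study).
perm = rng.permutation(len(y))
folds = np.array_split(perm, 10)
accs = []
for k in range(10):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    centroids = np.stack([X[train_idx][y[train_idx] == c].mean(axis=0)
                          for c in range(n_cls)])
    d = ((X[test_idx, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = d.argmin(axis=1)
    accs.append(float((pred == y[test_idx]).mean()))

mean_acc = float(np.mean(accs))
print(mean_acc > 1 / 3)  # decoding accuracy exceeds the 33.3% chance level
```

    With three balanced categories the chance level is 1/3, which is why the abstract compares the ~48% peak accuracy against 33.3% rather than 50%.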

    Circadian Gene Circuitry Predicts Hyperactive Behavior in a Mood Disorder Mouse Model

    Bipolar disorder, also known as manic-depressive illness, causes swings in mood and activity levels at irregular intervals. Such changes are difficult to predict, and their molecular basis remains unknown. Here, we use infradian (longer than a day) cyclic activity levels in αCaMKII (Camk2a) mutant mice as a proxy for such mood-associated changes. We report that gene-expression patterns in the hippocampal dentate gyrus could retrospectively predict whether the mice were in a state of high or low locomotor activity (LA). Expression of a subset of circadian genes, as well as levels of cAMP and pCREB, possible upstream regulators of circadian genes, correlated with LA states, suggesting that the intrinsic molecular circuitry changes concomitantly with infradian oscillatory LA. Taken together, these findings shed light on the molecular basis of how irregular biological rhythms and behavior are controlled by the brain
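    The retrospective prediction of activity state from expression patterns amounts to a classification problem, sketched below with logistic regression. The gene panel, effect sizes, and samples are all simulated assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated dentate-gyrus expression profiles for a hypothetical panel of
# 12 genes; the high-LA state shifts expression of a subset of them.
n_genes, n_samples = 12, 100
shift = np.zeros(n_genes)
shift[:4] = 1.5                          # only some genes track the LA state
y = rng.integers(0, 2, size=n_samples)   # 1 = high LA, 0 = low LA
X = rng.normal(size=(n_samples, n_genes)) + np.outer(y, shift)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Logistic regression fit by plain gradient descent on the log-loss.
w, b = np.zeros(n_genes), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n_samples)
    b -= 0.5 * float(np.mean(p - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(acc > 0.8)  # the LA state is retrospectively predictable
```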

    BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial

    Objective: To determine whether training with a brain–computer interface (BCI) to control an image of a phantom hand, which moves based on cortical currents estimated from magnetoencephalographic signals, reduces phantom limb pain. Methods: Twelve patients with chronic phantom limb pain of the upper limb due to amputation or brachial plexus root avulsion participated in a randomized, single-blinded crossover trial. Patients were trained for 3 days to move a virtual hand image controlled by the BCI with a real decoder, constructed to classify intact-hand movements from motor cortical currents, by moving their phantom hands ("real training"). Pain was evaluated using a visual analogue scale (VAS) before and after training, and at follow-up for an additional 16 days. As a control, patients performed the same training with the hand image controlled by randomly changing values ("random training"). The order of the two trainings was randomly assigned. This trial is registered at UMIN-CTR (UMIN000013608). Results: VAS at day 4 was significantly reduced from baseline after real training (mean [SD], 45.3 [24.2] to 30.9 [20.6], 1/100 mm; p = 0.009 < 0.025). Compared to VAS at day 1, VAS at days 4 and 8 was significantly reduced, by 32% and 36% respectively, after real training and was significantly lower than after random training (p < 0.01). Conclusion: Three-day training to move a hand image controlled by the BCI significantly reduced pain for 1 week. Classification of evidence: This study provides Class III evidence that BCI training reduces phantom limb pain